World Models

Can agents learn inside of their own dreams?

Abstract

We explore building generative neural network models of popular reinforcement learning environments[1]. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.


Introduction

A World Model, from Scott McCloud’s Understanding Comics.[2, 3]

Humans develop a mental model of the world based on what they are able to perceive with their limited senses. The decisions and actions we make are based on this internal model. Jay Wright Forrester, the father of system dynamics, defined a mental model as:

The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system.[4]

To handle the vast amount of information that flows through our daily lives, our brain learns an abstract representation of both spatial and temporal aspects of this information. We are able to observe a scene and remember an abstract description thereof [5, 6]. Evidence also suggests that what we perceive at any given moment is governed by our brain’s prediction of the future based on our internal model[7, 8].

What we see is based on our brain’s prediction of the future.[9, 10, 11]

One way of understanding the predictive model in our brains is that it might not be about just predicting the future in general, but predicting future sensory data given current motor actions[12, 13]. We are able to instinctively act on this predictive model and perform fast reflexive behaviours when we face danger[14], without the need to consciously plan out a course of action.

Take baseball for example[15]. A baseball batter has milliseconds to decide how they should swing the bat — shorter than the time it takes for visual signals from our eyes to reach our brain. The reason we are able to hit a 100mph fastball is due to our ability to instinctively predict when and where the ball will go. For professional players, this all happens subconsciously. Their muscles reflexively swing the bat at the right time and location in line with their internal models’ prediction[8]. They can quickly act on their predictions of the future without the need to consciously roll out possible future scenarios to form a plan[16].

In many reinforcement learning (RL)[17, 18, 19] problems, an artificial agent also benefits from having a good representation of past and present states, and a good predictive model of the future [20, 21], preferably a powerful predictive model implemented on a general purpose computer such as a recurrent neural network (RNN) [22, 23, 24].

Large RNNs are highly expressive models that can learn rich spatial and temporal representations of data. However, many model-free RL methods in the literature only use small neural networks with few parameters. The RL algorithm is often bottlenecked by the credit assignment problem1, which makes it hard for traditional RL algorithms to learn millions of weights of a large model. Hence, in practice, smaller networks are used, as they iterate faster to a good policy during training.

Ideally, we would like to be able to efficiently train large RNN-based agents. The backpropagation algorithm [25, 26, 27] can be used to train large neural networks efficiently. In this work we look at training a large neural network2 to tackle RL tasks, by dividing the agent into a large world model and a small controller model. We first train a large neural network to learn a model of the agent’s world in an unsupervised manner, and then train the smaller controller model to learn to perform a task using this world model. A small controller lets the training algorithm focus on the credit assignment problem on a small search space, while not sacrificing capacity and expressiveness via the larger world model. By training the agent through the lens of its world model, we show that it can learn a highly compact policy to perform its task.

Although there is a large body of research relating to model-based reinforcement learning, this article is not meant to be a review [28, 29] of the current state of the field. Instead, the goal of this article is to distill several key concepts from a series of papers 1990—2015 on combinations of RNN-based world models and controllers [22, 23, 24, 30, 31]. We will also discuss other related works in the literature that share similar ideas of learning a world model and training an agent using this model.

In this article, we present a simplified framework that we can use to experimentally demonstrate some of the key concepts from these papers, and also suggest further insights to effectively apply these ideas to various RL environments. We use similar terminology and notation as Learning to Think[31] when describing our methodology and experiments.


Agent Model

We present a simple model inspired by our own cognitive system. In this model, our agent has a visual sensory component that compresses what it sees into a small representative code. It also has a memory component that makes predictions about future codes based on historical information. Finally, our agent has a decision-making component that decides what actions to take based only on the representations created by its vision and memory components.

Our agent consists of three components that work closely together: Vision (V), Memory (M), and Controller (C).

VAE (V) Model

The environment provides our agent with a high dimensional input observation at each time step. This input is usually a 2D image frame in a video sequence. The role of the V model is to learn an abstract, compressed representation of each observed input frame.


Flow diagram of a Variational Autoencoder.[32, 33]

We use a Variational Autoencoder (VAE)[32, 33] as the V model in our experiments. In the following demo, we show how the V model compresses each frame it receives at time step t into a low dimensional latent vector z_t. This compressed representation can be used to reconstruct the original image.

Interactive Demo
A VAE trained on screenshots obtained from a VizDoom[34, 35] environment. You can load randomly chosen screenshots to be encoded into a small latent vector z, which is used to reconstruct the original screenshot. You can also experiment with adjusting the values of the z vector using the slider bars to see how it affects the reconstruction, or randomize z to observe the space of possible screenshots learned by our VAE.
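
To make this concrete, below is a minimal numpy sketch of the encode-sample-decode cycle performed by V. The encoder_net and decoder_net functions stand in for the convolutional networks described in the Appendix; their exact form here is an illustrative assumption.

import numpy as np

def vae_encode(frame, encoder_net):
  ''' Compress one 64x64x3 frame into a latent vector z. '''
  mu, logvar = encoder_net(frame)              # encoder_net is a stand-in for the conv encoder
  sigma = np.exp(logvar / 2.0)
  z = mu + sigma * np.random.randn(*mu.shape)  # reparameterization trick
  return z

def vae_decode(z, decoder_net):
  ''' Reconstruct an approximate frame from z. '''
  return decoder_net(z)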

MDN-RNN (M) Model

While it is the role of the V model to compress what the agent sees at each time frame, we also want to compress what happens over time. For this purpose, the role of the M model is to predict the future. The M model serves as a predictive model of the future z vectors that V is expected to produce. Because many complex environments are stochastic in nature, we train our RNN to output a probability density function p(z) instead of a deterministic prediction of z.

RNN with a Mixture Density Network output layer. The MDN outputs the parameters of a mixture of Gaussian distribution used to sample a prediction of the next latent vector z.

In our approach, we approximate p(z) as a mixture of Gaussian distribution, and train the RNN to output the probability distribution of the next latent vector z_{t+1} given the current and past information made available to it.

More specifically, the RNN will model P(z_{t+1} | a_t, z_t, h_t), where a_t is the action taken at time t and h_t is the hidden state of the RNN at time t. During sampling, we can adjust a temperature parameter τ to control model uncertainty, as done in [36] — we will find adjusting τ to be useful for training our controller later on.

SketchRNN[37] is an example of a MDN-RNN used to predict the next pen strokes of a sketch drawing. We use a similar model to predict the next latent vector z.

This approach is known as a Mixture Density Network[38, 39] combined with a RNN (MDN-RNN)[40, 41], and has been used successfully in the past for sequence generation problems such as generating handwriting[40, 42] and sketches[36].
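
To make the MDN output concrete, the sketch below shows one common way to sample the next latent vector from a mixture of Gaussians with a temperature parameter τ. The array shapes and the way the RNN exposes its mixture parameters (logpi, mu, logsigma) are assumptions for illustration, not the exact implementation.

import numpy as np

def sample_next_z(logpi, mu, logsigma, tau=1.0):
  ''' Sample z_{t+1} from MDN parameters of assumed shape [n_mix, n_z]. '''
  logpi = logpi / tau                                # temperature flattens or sharpens mixture weights
  pi = np.exp(logpi - logpi.max(axis=0, keepdims=True))
  pi = pi / pi.sum(axis=0, keepdims=True)            # softmax over mixture components
  n_mix, n_z = mu.shape
  z = np.zeros(n_z)
  for i in range(n_z):                               # each latent dimension has its own mixture
    k = np.random.choice(n_mix, p=pi[:, i])          # pick a Gaussian component
    sigma = np.exp(logsigma[k, i]) * np.sqrt(tau)    # higher tau widens the sampling noise
    z[i] = mu[k, i] + sigma * np.random.randn()
  return z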

Controller (C) Model

The Controller (C) model is responsible for determining the course of actions to take in order to maximize the expected cumulative reward of the agent during a rollout in the environment. In our experiments, we deliberately make C as simple and small as possible, and train it separately from V and M, so that most of our agent’s complexity resides in the world model (V and M).

C is a simple single layer linear model that maps z_t and h_t directly to action a_t at each time step:

a_t = W_c [z_t h_t] + b_c

In this linear model, W_c and b_c are the weight matrix and bias vector that map the concatenated input vector [z_t h_t] to the output action vector a_t.3
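
In code, the entire policy of C is a single affine map. A minimal numpy sketch is shown below; the tanh squashing that bounds the actions follows the Appendix.

import numpy as np

def controller_action(W_c, b_c, z, h):
  ''' Single-layer linear controller: concatenate z and h, apply an affine map. '''
  x = np.concatenate([z, h])
  return np.tanh(np.dot(W_c, x) + b_c)   # tanh bounds the actions to a valid range (see Appendix)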

Putting Everything Together

The following flow diagram illustrates how V, M, and C interact with the environment:

Flow diagram of our Agent model. The raw observation is first processed by V at each time step t to produce z_t. The input into C is this latent vector z_t concatenated with M’s hidden state h_t at each time step. C will then output an action vector a_t for motor control. M will then take the current z_t and action a_t as an input to update its own hidden state to produce h_{t+1} to be used at time t+1.

Below is the pseudocode for how our agent model is used in the OpenAI Gym [1] environment. Running this function on a given controller C will return the cumulative reward during a rollout in the environment.

def rollout(controller):
  ''' env, rnn, vae are global variables '''
  obs = env.reset()
  h = rnn.initial_state()
  done = False
  cumulative_reward = 0
  while not done:
    # compress the raw observation into a latent vector
    z = vae.encode(obs)
    # the controller acts on the latent vector and M's hidden state
    a = controller.action([z, h])
    obs, reward, done, info = env.step(a)
    cumulative_reward += reward
    # update M's hidden state with the latest latent vector and action
    h = rnn.forward([a, z, h])
  return cumulative_reward

This minimal design for C also offers important practical benefits. Advances in deep learning have given us the tools to train large, sophisticated models efficiently, provided we can define a well-behaved, differentiable loss function. Our V and M models are designed to be trained efficiently with the backpropagation algorithm using modern GPU accelerators, so we would like most of the model’s complexity and parameters to reside in V and M. The number of parameters of C, a linear model, is minimal in comparison. This choice allows us to explore more unconventional ways to train C — for example, even using evolution strategies (ES)[43, 44, 45] to tackle more challenging RL tasks where the credit assignment problem is difficult.

To optimize the parameters of C, we chose the Covariance-Matrix Adaptation Evolution Strategy (CMA-ES)[46, 47, 45] as our optimization algorithm since it is known to work well for solution spaces of up to a few thousand parameters. We evolved parameters of C on a single machine with multiple CPU cores running multiple rollouts of the environment in parallel.
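
Below is a rough sketch of this optimization loop, reusing the rollout function defined earlier. The pycma package, the set_params helper, and the initial step size are illustrative assumptions; the population size and rollout averaging follow the settings described in the Appendix.

import numpy as np
import cma   # the pycma package, used here for illustration

num_params = 867   # W_c and b_c of the Car Racing controller

def evaluate(params, n_rollouts=16):
  ''' Fitness = average cumulative reward over several random rollouts. '''
  controller.set_params(params)     # hypothetical helper that loads W_c and b_c
  return np.mean([rollout(controller) for _ in range(n_rollouts)])

es = cma.CMAEvolutionStrategy(np.zeros(num_params), 0.1, {'popsize': 64})
while not es.stop():
  candidates = es.ask()                          # sample a population of parameter vectors
  fitness = [evaluate(p) for p in candidates]
  es.tell(candidates, [-f for f in fitness])     # CMA-ES minimizes, so negate the reward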

For more specific information about the models, training procedures, and environments used in our experiments, please refer to the Appendix.


Car Racing Experiment: World Model for Feature Extraction

A predictive world model can help us extract useful representations of space and time. By using these features as inputs to a controller, we can train a compact and minimal controller to perform a continuous control task, such as learning to drive from pixel inputs in a top-down car racing environment[48]. In this section, we describe how we can train the Agent model described earlier to solve this car racing task. To our knowledge, our agent is the first known solution to achieve the required score to solve this task.4

Our agent learning to navigate a top-down racing environment.[48]

In this environment, the tracks are randomly generated for each trial, and our agent is rewarded for visiting as many tiles as possible in the least amount of time. The agent controls three continuous actions: steering left/right, acceleration, and brake.

To train our V model, we first collect a dataset of 10,000 random rollouts in the environment. We have an agent act randomly to explore the environment multiple times, and record the random actions a_t taken and the resulting observations from the environment.5 We use this dataset to train V to learn a latent space of each frame observed. We train our VAE to encode each frame into a low dimensional latent vector z by minimizing the difference between a given frame and the reconstructed version of the frame produced by the decoder from z. The following demo shows the results of our VAE after training:

Interactive Demo
Our VAE trained on observations from CarRacing-v0[48]. Despite losing details during this lossy compression process, the latent vector z captures the essence of each 64x64px image frame.

We can now use our trained V model to pre-process each frame at time t into z_t to train our M model. Using this pre-processed data, along with the recorded random actions a_t taken, our MDN-RNN can now be trained to model P(z_{t+1} | a_t, z_t, h_t) as a mixture of Gaussians.6
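
As a sketch of how this training data might be arranged (array names are illustrative), each training example pairs (z_t, a_t) as the input with z_{t+1} as the MDN-RNN's prediction target:

import numpy as np

def make_mdn_rnn_dataset(z, actions):
  ''' z: [T, 32] latent vectors from the VAE; actions: [T, 3] recorded random actions. '''
  inputs = np.concatenate([z[:-1], actions[:-1]], axis=1)  # (z_t, a_t) for t = 0 .. T-2
  targets = z[1:]                                          # z_{t+1}, modeled as a mixture of Gaussians
  return inputs, targets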

In this experiment, the world model (V and M) has no knowledge about the actual reward signals in the environment. Its task is simply to compress and predict the sequence of image frames observed. Only the Controller (C) Model has access to the reward information from the environment. Since there are a mere 867 parameters inside the linear controller model, evolutionary algorithms such as CMA-ES are well suited for this optimization task.

The figure below compares the actual observation given to the agent with the observation captured by the world model. We can use the VAE to reconstruct each frame using z_t at each time step to visualize the quality of the information the agent actually sees during a rollout:

Actual observations from the environment.
What gets encoded into z_t.

Procedure

To summarize the Car Racing experiment, below are the steps taken:

  1. Collect 10,000 rollouts from a random policy.
  2. Train VAE (V) to encode frames into latent vector z ∈ R^{32}.
  3. Train MDN-RNN (M) to model P(z_{t+1} | a_t, z_t, h_t).
  4. Define Controller (C) as a_t = W_c [z_t h_t] + b_c.
  5. Use CMA-ES to solve for the W_c and b_c that maximize the expected cumulative reward.
Model        Parameter Count
VAE          4,348,547
MDN-RNN      422,368
Controller   867

Car Racing Experiment Results

V Model Only

Training an agent to drive is not a difficult task if we have a good representation of the observation. Previous works[49, 50, 51] have shown that with a good set of hand-engineered information about the observation, such as LIDAR information, angles, positions and velocities, one can easily train a small feed-forward network to take this hand-engineered input and output a satisfactory navigation policy. For this reason, we first want to test our agent by handicapping C to only have access to V but not M, so we define our controller as a_t = W_c z_t + b_c.

Limiting our controller to see only z_t, but not h_t, results in wobbly and unstable driving behaviours.

Although the agent is still able to navigate the race track in this setting, we notice it wobbles around and misses the tracks on sharper corners. This handicapped agent achieved an average score of 632 ± 251 over 100 random trials, in line with the performance of other agents on OpenAI Gym’s leaderboard[48] and traditional Deep RL methods such as A3C[52, 53]. Adding a hidden layer to the C model’s policy network improves the results to 788 ± 141, not enough to solve the environment.

Full World Model (V and M)

The representation z_t provided by our V model only captures a representation at a moment in time and doesn’t have much predictive power. In contrast, M is trained to do one thing, and to do it really well, which is to predict z_{t+1}. Since M’s prediction of z_{t+1} is produced from the RNN’s hidden state h_t at time t, this vector is a good candidate for the set of learned features we can give to our agent. Combining z_t with h_t gives our controller C a good representation of both the current observation, and what to expect in the future.

Driving is more stable if we give our controller access to both z_t and h_t.

Indeed, we see that allowing the agent to access both z_t and h_t greatly improves its driving capability. The driving is more stable, and the agent is able to attack the sharp corners effectively. Furthermore, we see that in making these fast reflexive driving decisions during a car race, the agent does not need to plan ahead and roll out hypothetical scenarios of the future. Since h_t contains information about the probability distribution of the future, the agent can just query the RNN instinctively to guide its action decisions. Like a seasoned Formula One driver or the baseball player discussed earlier, the agent can instinctively predict when and where to navigate in the heat of the moment.

Method                                         Average Score over 100 Random Tracks
DQN[54]                                        343 ± 18
A3C (continuous)[53]                           591 ± 45
A3C (discrete)[52]                             652 ± 10
ceobillionaire’s algorithm (unpublished)[48]   838 ± 11
V model only, z input                          632 ± 251
V model only, z input with a hidden layer      788 ± 141
Full World Model, z and h                      906 ± 21

Our agent was able to achieve a score of 906 ± 21 over 100 random trials, effectively solving the task and obtaining new state of the art results. Previous attempts[52, 53] using traditional Deep RL methods obtained average scores in the 591-652 range, and the best reported solution on the leaderboard[48] obtained an average score of 838 ± 11 over 100 random consecutive trials. Traditional Deep RL methods often require pre-processing of each frame, such as employing edge-detection[53], in addition to stacking a few recent frames[52, 53] into the input. In contrast, our world model takes in a stream of raw RGB pixel images and directly learns a spatial-temporal representation. To our knowledge, our method is the first reported solution to solve this task.


Car Racing Dreams

Since our world model is able to model the future, we are able to have it hallucinate hypothetical car racing scenarios on its own. We can ask it to produce the probability distribution of z_{t+1} given the current states, sample a z_{t+1} and use this sample as the real observation. We can put our trained C back into this hallucinated environment generated by M. The following demo shows how our world model can be used to hallucinate the car racing environment:

Interactive Demo
Our agent driving inside its own dream. Here, we deploy our trained policy into a fake environment generated by the MDN-RNN, and rendered using the VAE’s decoder. You can override the agent’s actions by tapping on the left or right side of the screen, or by hitting arrow keys (left/right to steer, up/down to accelerate or brake). The uncertainty level of the environment can be adjusted by changing τ using the slider on the bottom right.

We have just seen that a policy learned in the real environment appears to somewhat function inside of the dream environment. This begs the question — can we train our agent to learn inside of its own dream, and transfer this policy back to the actual environment?


VizDoom Experiment: Learning Inside of a Dream

If our world model is sufficiently accurate for its purpose, and complete enough for the problem at hand, we should be able to substitute the actual environment with this world model. After all, our agent does not directly observe reality, but only sees what the world model lets it see. In this experiment, we train an agent inside the hallucination generated by its world model trained to mimic a VizDoom[34] environment.

Our final agent solving the VizDoom: Take Cover environment.[34, 35]

The agent must learn to avoid fireballs shot by monsters from the other side of the room with the sole intent of killing the agent. There are no explicit rewards in this environment, so to mimic natural selection, the cumulative reward can be defined to be the number of time steps the agent manages to stay alive in a rollout. Each rollout in the environment runs for a maximum of 2100 frames (~60 seconds), and the task is considered solved when the average survival time over 100 consecutive rollouts is greater than 750 frames (~20 seconds)[35].


Procedure

Our VizDoom experiment is largely the same as the Car Racing task, except for a few key differences. In the Car Racing task, M is only trained to model the next z_t. Since we want to build a world model we can train our agent in, our M model here will also predict whether the agent dies in the next frame (as a binary event done_t), in addition to the next frame z_t.

Since the M model can predict the done state in addition to the next observation, we now have all of the ingredients needed to make a full RL environment. We first build an OpenAI Gym environment interface by wrapping a gym.Env[1] interface over our M as if it were a real Gym environment, and then train our agent in this virtual environment instead of using the actual environment.

In this simulation, we don’t need the V model to encode any real pixel frames during the hallucination process, so our agent trains entirely in a latent space environment. This has many advantages that will be discussed later on.

This virtual environment has an identical interface to the real environment, so after the agent learns a satisfactory policy in the virtual environment, we can easily deploy this policy back into the actual environment to see how well the policy transfers over.
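
A minimal sketch of such a wrapper is shown below. The rnn object's sample_initial_z and sample_next methods are assumptions standing in for the trained M model; the reward of +1 per surviving time step and the default τ=1.15 follow the text and the Appendix.

import gym

class DreamEnv(gym.Env):
  ''' Hallucinated environment generated entirely by the M model. '''
  def __init__(self, rnn, tau=1.15):
    self.rnn = rnn
    self.tau = tau

  def reset(self):
    self.h = self.rnn.initial_state()
    self.z = self.rnn.sample_initial_z()   # e.g. a latent frame drawn from the recorded data
    return self.z

  def step(self, action):
    # M predicts the next latent frame and the done flag; no pixels are ever rendered
    self.z, done, self.h = self.rnn.sample_next(self.z, action, self.h, tau=self.tau)
    return self.z, 1.0, done, {}           # reward of +1 for every time step survived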

To summarize the Take Cover experiment, below are the steps taken:

  1. Collect 10,000 rollouts from a random policy.
  2. Train VAE (V) to encode frames into latent vector z ∈ R^{64}, and use V to convert the images collected from (1) into the latent space representation.
  3. Train MDN-RNN (M) to model P(z_{t+1}, done_{t+1} | a_t, z_t, h_t).
  4. Define Controller (C) as a_t = W_c [z_t h_t].
  5. Use CMA-ES to solve for a W_c that maximizes the expected survival time inside the virtual environment.
  6. Use learned policy from (5) on actual Gym environment.
Model        Parameter Count
VAE          4,446,915
MDN-RNN      1,678,785
Controller   1,088

Training Inside of the Dream

After some training, our controller learns to navigate around the dream environment and escape from deadly fireballs shot by monsters generated by the M model. Our agent achieved a score in this virtual environment of ~900 frames.

The following demo shows how our agent navigates inside its own dream. The M model learns to generate monsters that shoot fireballs in the direction of the agent, while the C model discovers a policy to avoid these hallucinated fireballs. Here, the V model is only used to decode the latent vectors z_t produced by M into a sequence of pixel images we can see:

Interactive Demo
Our agent discovers a policy to avoid hallucinated fireballs. In this demo, you can override the agent’s action by using the left/right keys on your keyboard, or by tapping on either side of the screen. You can also control the uncertainty level of the environment by adjusting the temperature parameter using the slider on the bottom right.

Here, our RNN-based world model is trained to mimic a complete game environment designed by human programmers. By learning only from raw image data collected from random episodes, it learns how to simulate the essential aspects of the game — such as the game logic, enemy behaviour, physics, and the 3D graphics rendering.

For instance, if the agent selects the left action, the M model learns to move the agent to the left and adjust its internal representation of the game states accordingly. It also learns to block the agent from moving beyond the walls on both sides of the level if the agent attempts to move too far in either direction. Occasionally, the M model needs to keep track of multiple fireballs being shot from several different monsters and coherently move them along in their intended directions. It must also detect whether the agent has been killed by one of these fireballs.

Unlike the actual game environment, however, we note that it is possible to add extra uncertainty into the virtual environment, thus making the game more challenging in the dream environment. We can do this by increasing the temperature parameter τ during the sampling process of z_{t+1}, as done in [36]. By increasing the uncertainty, our dream environment becomes more difficult compared to the actual environment. The fireballs may move more randomly in a less predictable path compared to the actual game. Sometimes the agent may even die due to sheer misfortune, without explanation.

We find agents that perform well in higher temperature settings generally perform better in the normal setting. In fact, increasing τ helps prevent our controller from taking advantage of the imperfections of our world model — we will discuss this in more depth later on.


Transfer Policy to Actual Environment

Deploying our policy learned in the dream RNN environment back into the actual VizDoom environment.

We took the agent trained in the virtual environment and tested its performance on the original VizDoom scenario. The score over 100 random consecutive trials is ~1100 frames, far beyond the required score of 750, and also much higher than the score obtained inside the more difficult virtual environment.7

Cropped 64x64px frame of environment.
Reconstruction from latent vector.

We see that even though the V model is not able to capture all of the details of each frame correctly, for instance, it may not reproduce the exact number of monsters, the agent is still able to use the learned policy to navigate in the real environment. Since the virtual environment cannot even keep track of the exact number of monsters in the first place, an agent that is able to survive the noisier and more uncertain virtual nightmare environment will thrive in this clean, noiseless environment.


Cheating the World Model

In our childhood, we may have encountered ways to exploit video games that were not intended by the original game designer[55]. Players discover ways to collect unlimited lives or health, and by taking advantage of these exploits, they can easily complete an otherwise difficult game. However, in the process of doing so, they may have forfeited the opportunity to learn the skill required to master the game as intended by the game designer.

Agent discovers an adversarial policy that fools the monsters inside the world model into never launching any fireballs during some rollouts.

For instance, in our initial experiments, we noticed that our agent discovered an adversarial policy to move around in such a way that the monsters in this virtual environment governed by the M model never shoot a single fireball in some rollouts. Even when there are signs of a fireball forming, the agent moves in a way to extinguish the fireballs magically, as if it has superpowers in the environment.

Because our world model is only an approximate probabilistic model of the environment, it will occasionally generate trajectories that do not follow the laws governing the actual environment. As we saw previously, even the number of monsters on the other side of the room in the actual environment is not exactly reproduced by the world model. Just as a child who learns that objects in the air usually fall to the ground might also imagine unrealistic superheroes who fly across the sky, our world model will be exploitable by the controller, even if such exploits do not exist in the actual environment.

And since we are using the M model to generate a virtual dream environment for our agent, we are also giving the controller access to all of the hidden states of M. This is essentially granting our agent access to all of the internal states and memory of the game engine of the game it is playing. Therefore our agent can efficiently explore ways to directly manipulate the hidden states of the game engine in its quest to maximize its expected cumulative reward. The weakness of this approach of learning a policy inside a learned dynamics model is that our agent can easily find an adversarial policy that can fool our dynamics model — it will find a policy that looks good under our dynamics model, but will fail in the actual environment, usually because it visits states where the model is wrong, since they are far from the training distribution.

This weakness could be the reason why many previous works that learn dynamics models of RL environments do not actually use those models to fully replace the actual environments [56, 57]. As in the M model proposed in 1990 [22, 23, 24], the dynamics model is a deterministic differentiable model, making the model easy for the agent to exploit if it is not perfect. Using Bayesian models, as in PILCO[58], helps to address this issue with uncertainty estimates to some extent; however, they do not fully solve the problem. Recent work[59] combines the model-based approach with traditional model-free RL training by first initializing the policy network with the learned policy, but must subsequently rely on a model-free method to fine-tune this policy in the actual environment.8

To make it more difficult for our C model to exploit deficiencies in the M model, we chose to use the MDN-RNN as the dynamics model, which models the distribution of possible outcomes in the actual environment, rather than merely predicting a deterministic future. Even if the actual environment is deterministic, the MDN-RNN would in effect approximate it as a stochastic environment. This has the advantage of allowing us to train our C model inside a more stochastic version of any environment — we can simply adjust the temperature parameter τ to control the amount of randomness in the M model, hence controlling the tradeoff between realism and exploitability.

Using a mixture of Gaussian model may seem like overkill given that the latent space encoded with the VAE model is just a diagonal Gaussian. However, the discrete modes in a mixture density model are useful for environments with random discrete events, such as whether a monster decides to shoot a fireball or stay put. While a Gaussian might be sufficient to encode individual frames, a RNN with a mixture density output layer makes it easier to model the logic behind a more complicated environment with discrete random states.

For instance, if we set the temperature parameter to a very low value of τ=0.1, effectively training our C model with an M model that is almost identical to a deterministic LSTM, the monsters inside this dream environment fail to shoot fireballs, no matter what the agent does, due to mode collapse. The M model is not able to jump to another mode in the mixture of Gaussian model where fireballs are formed and shot. Any policy trained in this dream will achieve a perfect score of 2100 most of the time, but will obviously fail when unleashed into the harsh reality of the actual world, underperforming even a random policy.

In the following demo, we show that even low values of τ ~ 0.5 make it difficult for the MDN-RNN to generate fireballs:

Interactive Demo
For low τ settings, monsters in the M model rarely shoot fireballs. Even when you try to increase τ to 1.0 using the slider bar, the agent will occasionally extinguish fireballs still being formed, by fooling M.

Note again, however, that the simpler and more robust approach in Learning to Think [31] does not insist on using M for step by step planning. Instead, C can learn to use M’s subroutines (parts of M’s weight matrix) for arbitrary computational purposes but can also learn to ignore M when M is useless and when ignoring M yields better performance. Nevertheless, at least in our present C—M variant, M’s predictions are essential for teaching C, more like in some of the early C—M systems[22, 23, 24], but combined with evolution or black box optimization.

By making the temperature τ an adjustable parameter of the M model, we can see the effect of training the C model on hallucinated virtual environments with different levels of uncertainty, and see how well they transfer over to the actual environment. We experimented with varying the temperature in the virtual environment and observing the resulting average score over 100 random rollouts in the actual environment after training the agent inside the virtual environment with a given temperature:

Temperature               Score in Virtual Environment   Score in Actual Environment
0.10                      2086 ± 140                     193 ± 58
0.50                      2060 ± 277                     196 ± 50
1.00                      1145 ± 690                     868 ± 511
1.15                      918 ± 546                      1092 ± 556
1.30                      732 ± 269                      753 ± 139
Random Policy Baseline    N/A                            210 ± 108
Gym Leaderboard[35]       N/A                            820 ± 58

We see that while increasing the temperature of the M model makes it more difficult for the C model to find adversarial policies, increasing it too much will make the virtual environment too difficult for the agent to learn anything, hence in practice it is a hyperparameter we can tune. The temperature also affects the types of strategies the agent discovers. For example, although the best score obtained is 1092 ± 556 over 100 random trials using a temperature of 1.15, increasing τ a notch to 1.30 results in a lower score but at the same time a less risky strategy with a lower variance of returns. For comparison, the best score on the OpenAI Gym leaderboard[35] is 820 ± 58.


Iterative Training Procedure

In our experiments, the tasks are relatively simple, so a reasonable world model can be trained using a dataset collected from a random policy. But what if our environments become more sophisticated? In any difficult environment, parts of the world are made available to the agent only after it learns how to strategically navigate through its world.

For more complicated tasks, an iterative training procedure is required. We need our agent to be able to explore its world, and constantly collect new observations so that its world model can be improved and refined over time. An iterative training procedure, adapted from Learning To Think[31], is as follows (a rough code sketch follows the list):

  1. Initialize M, C with random model parameters.
  2. Rollout to the actual environment N times. Agent may learn during rollouts. Save all actions a_t and observations x_t during rollouts to storage device.
  3. Train M to model P(x_{t+1}, r_{t+1}, a_{t+1}, done_{t+1} | x_t, a_t, h_t).
  4. Go back to (2) if task has not been completed.
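
Below is a rough sketch of this loop in the style of the rollout pseudocode above; every function name here is a hypothetical placeholder.

def iterative_training(env, N):
  ''' rnn, controller, storage are global variables (as in the rollout pseudocode). '''
  initialize_random_params(rnn, controller)              # step 1
  while not task_completed():                            # step 4: repeat until solved
    for _ in range(N):                                   # step 2: act in the real environment
      storage.append(collect_rollout(env, controller))   # save actions and observations
    train_world_model(rnn, storage)                      # step 3: fit M to the growing dataset
    # the agent may also learn during or between rollouts, e.g. by evolving C inside the dream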

We have shown that one iteration of this training loop was enough to solve simple tasks. For more difficult tasks, we need our controller in Step 2 to actively explore parts of the environment that are beneficial for improving its world model. An exciting research direction is to look at ways to incorporate artificial curiosity and intrinsic motivation[60, 61, 62, 63, 64] and information seeking[65, 66] abilities in an agent to encourage novel exploration[67]. In particular, we can augment the reward function based on improvement in compression quality[60, 61, 62, 31].

In the present approach, since M is a MDN-RNN that models a probability distribution for the next frame, if it does a poor job, then it means the agent has encountered parts of the world that it is not familiar with. Therefore we can adapt and reuse M’s training loss function to encourage curiosity. By flipping the sign of M’s loss function in the actual environment, the agent will be encouraged to explore parts of the world that it is not familiar with. The new data it collects may improve the world model.
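
A minimal sketch of such a curiosity bonus, assuming a hypothetical rnn.log_prob method that evaluates M's mixture density for the observed next latent vector:

def curiosity_reward(rnn, z_next, a, z, h):
  ''' Intrinsic reward = M's prediction loss (negative log-likelihood of z_next), '''
  ''' so the agent is rewarded for reaching states that M predicts poorly.        '''
  return -rnn.log_prob(z_next, a, z, h)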

The iterative training procedure requires the M model to not only predict the next observation x and done, but also predict the action and reward for the next time step. This may be required for more difficult tasks. For instance, if our agent needs to learn complex motor skills to walk around its environment, the world model will learn to imitate its own C model that has already learned to walk. After difficult motor skills, such as walking, are absorbed into a large world model with lots of capacity, the smaller C model can rely on the motor skills already absorbed by the world model and focus on learning higher-level skills to navigate itself using the motor skills it had already learned.9


How information becomes memory.[68]

An interesting connection to the neuroscience literature is the work on hippocampal replay that examines how the brain replays recent experiences when an animal rests or sleeps. Replaying recent experiences plays an important role in memory consolidation[69] — where hippocampus-dependent memories become independent of the hippocampus over a period of time[68]. As Foster[69] puts it, replay is “less like dreaming and more like thought”. We invite readers to read Replay Comes of Age[69] for a detailed overview of replay from a neuroscience perspective with connections to theoretical reinforcement learning.

Iterative training could allow the C—M model to develop a natural hierarchical way to learn. Recent works on self-play in RL[70, 71, 72] and PowerPlay[73, 74] also explore methods that lead to a natural curriculum learning[75], and we feel this is one of the more exciting research areas of reinforcement learning.


Related Work

There is extensive literature on learning a dynamics model, and using this model to train a policy. Many concepts first explored in the 1980s for feed-forward neural networks (FNNs)[20, 76, 77, 78, 79] and in the 1990s for RNNs[22, 23, 24, 30] laid some of the groundwork for Learning to Think[31]. The more recent PILCO[58, 80, 81] is a probabilistic model-based search policy method designed to solve difficult control problems. Using data collected from the environment, PILCO uses a Gaussian process (GP) model to learn the system dynamics, and then uses this model to sample many trajectories in order to train a controller to perform a desired task, such as swinging up a pendulum, or riding a unicycle.

While Gaussian processes work well with a small set of low dimensional data, their computational complexity makes them difficult to scale up to model a large history of high dimensional observations. Other recent works[82, 83] use Bayesian neural networks instead of GPs to learn a dynamics model. These methods have demonstrated promising results on challenging control tasks[84], where the states are known and well defined, and the observation is relatively low dimensional. Here we are interested in modelling dynamics observed from high dimensional visual data where our input is a sequence of raw pixel frames.

In robotic control applications, the ability to learn the dynamics of a system from observing only camera-based video inputs is a challenging but important problem. Early work on RL for active vision trained an FNN to take the current image frame of a video sequence to predict the next frame[85], and used this predictive model to train a fovea-shifting control network trying to find targets in a visual scene. To get around the difficulty of training a dynamical model to learn directly from high-dimensional pixel images, researchers explored using neural networks to first learn a compressed representation of the video frames. Recent work along these lines[86, 87] was able to train controllers using the bottleneck hidden layer of an autoencoder as low-dimensional feature vectors to control a pendulum from pixel inputs. Learning a model of the dynamics from a compressed latent space enables RL algorithms to be much more data-efficient[88, 89, 90]. We invite readers to watch Finn’s lecture on Model-Based RL[90] to learn more.

Video game environments are also popular in model-based RL research as a testbed for new ideas. Guzdial et al.[91] used a feed-forward convolutional neural network (CNN) to learn a forward simulation model of a video game. Learning to predict how different actions affect future states in the environment is useful for game-play agents, since if our agent can predict what happens in the future given its current state and action, it can simply select the best action that suits its goal. This has been demonstrated not only in early work[79, 85] (when compute was a million times more expensive than today) but also in recent studies[92] on several competitive VizDoom[34] environments.

The works mentioned above use FNNs to predict the next video frame. We may want to use models that can capture longer term time dependencies. RNNs are powerful models suitable for sequence modelling[40]. In a lecture called Hallucination with RNNs[93], Graves demonstrated the ability of RNNs to learn a probabilistic model of Atari game environments. He trained RNNs to learn the structure of such a game and then showed that they can hallucinate similar game levels on their own.

A controller with internal RNN model of the world.[22]

Using RNNs to develop internal models to reason about the future has been explored as early as 1990 in a paper called Making the World Differentiable[22], and then further explored in [23, 24, 30]. A more recent paper called Learning to Think[31] presented a unifying framework for building a RNN-based general problem solver that can learn a world model of its environment and also learn to reason about the future using this model. Subsequent works have used RNN-based models to generate many frames into the future[57, 56, 94], and also as an internal model to reason about the future[95, 96, 97].

In this work, we used evolution strategies (ES) to train our controller, as it offers many benefits. For instance, we only need to provide the optimizer with the final cumulative reward, rather than the entire history. ES is also easy to parallelize — we can launch many rollout instances with different solutions across many workers and quickly compute a set of cumulative rewards in parallel. Recent works [98, 99, 100, 101] have confirmed that ES is a viable alternative to traditional Deep RL methods on many strong baseline tasks.

Before the popularity of Deep RL methods[102], evolution-based algorithms had been shown to be effective at finding solutions for RL tasks[103, 104, 105, 106, 107, 108]. Evolution-based algorithms have even been able to solve difficult RL tasks from high dimensional pixel inputs[109, 110, 111]. More recent works[112] also combine VAE and ES, which is similar to our approach.


Discussion

We have demonstrated the possibility of training an agent to perform tasks entirely inside of its simulated latent space dream world. This approach offers many practical benefits. For instance, running computationally intensive game engines requires using heavy compute resources for rendering the game states into image frames, or calculating physics not immediately relevant to the game. We may not want to waste cycles training an agent in the actual environment, but instead train the agent as many times as we want inside its simulated environment. Training agents in the real world is even more expensive, so world models that are trained incrementally to simulate reality will make it easier to experiment with different approaches for training our agents.

Furthermore, we can take advantage of deep learning frameworks to accelerate our world model simulations using GPUs in a distributed environment. The benefit of implementing the world model as a fully differentiable recurrent computation graph also means that we may be able to train our agents in the dream directly using the backpropagation algorithm to fine-tune its policy to maximize an objective function [22, 23, 24].

The choice of using a VAE for the V model and training it as a standalone model also has its limitations, since it may encode parts of the observations that are not relevant to a task. After all, unsupervised learning cannot, by definition, know what will be useful for the task at hand. For instance, it reproduced unimportant detailed brick tile patterns on the side walls in the Doom environment, but failed to reproduce task-relevant tiles on the road in the Car Racing environment. By training together with an M model that predicts rewards, the VAE may learn to focus on task-relevant areas of the image, but the tradeoff here is that we may not be able to reuse the VAE effectively for new tasks without retraining.

Learning task-relevant features has connections to neuroscience as well. Primary sensory neurons are released from inhibition when rewards are received, which suggests that they generally learn task-relevant features, rather than just any features, at least in adulthood[113].

Future work might explore the use of an unsupervised segmentation layer like in [114] to extract better feature representations that might be more useful and interpretable compared to the representations learned using a VAE.

Another concern is the limited capacity of our world model. While modern storage devices can store large amounts of historical data generated using the iterative training procedure, our LSTM[115, 116]-based world model may not be able to store all of the recorded information inside its weight connections. While the human brain can hold decades and even centuries of memories to some resolution[117], our neural networks trained with backpropagation have more limited capacity and suffer from issues such as catastrophic forgetting[118, 119, 120]. Future work may explore replacing the small MDN-RNN network with higher capacity models [121, 122, 123, 124, 125], or incorporating an external memory module[126], if we want our agent to learn to explore more complicated worlds.

Ancient drawing (1990) of a RNN-based controller interacting with an environment.[22]

Like early RNN-based C—M systems [22, 23, 24, 30], ours simulates possible futures time step by time step, without profiting from human-like hierarchical planning or abstract reasoning, which often ignores irrelevant spatial-temporal details. However, the more general Learning To Think[31] approach is not limited to this rather naive approach. Instead it allows a recurrent C to learn to address “subroutines” of the recurrent M, and reuse them for problem solving in arbitrary computable ways, e.g., through hierarchical planning or other kinds of exploiting parts of M’s program-like weight matrix. A recent One Big Net[127] extension of the C—M approach collapses C and M into a single network, and uses PowerPlay-like[73, 74] behavioural replay (where the behaviour of a teacher net is compressed into a student net[128]) to avoid forgetting old prediction and control skills when learning new ones. Experiments with those more general approaches are left for future work.

This work is meant to be a live research project and will be revised and expanded over time. This article will be the first of a series of articles exploring World Models. If you would like to discuss any issues, give feedback, or even contribute to future work, please visit the GitHub repository of this page for more information.

Acknowledgments

We would like to thank Blake Richards, Kory Mathewson, Kyle McDonald, Kai Arulkumaran, Ankur Handa, Denny Britz, Elwin Ha and Natasha Jaques for their thoughtful feedback on this article, and for offering their valuable perspectives and insights from their areas of expertise.

The interactive demos in this article were all built using p5.js. Deploying all of these machine learning models in a web browser was made possible with deeplearn.js, a hardware-accelerated machine learning framework for the browser, developed by the People+AI Research Initiative (PAIR) team at Google. A special thanks goes to Nikhil Thorat and Daniel Smilkov for their support.

We would like to thank Chris Olah and the rest of the Distill editorial team for their valuable feedback and generous editorial support, in addition to supporting the use of their distill.pub technology.

We would like to extend our thanks to Alex Graves, Douglas Eck, Mike Schuster, Rajat Monga, Vincent Vanhoucke, Jeff Dean and the Google Brain team for helpful feedback and for encouraging us to explore this area of research.

Any errors here are our own and do not reflect opinions of our proofreaders and colleagues. If you see mistakes or want to suggest changes, feel free to contribute feedback by participating in the discussion forum for this article.

The experiments in this article were performed on both a P100 GPU and a 64-core CPU Ubuntu Linux virtual machine provided by Google Cloud Platform, using TensorFlow and OpenAI Gym.

Citation

For attribution in academic contexts, please cite this work as

Ha and Schmidhuber, "World Models", 2018. https://doi.org/10.5281/zenodo.1207631

BibTeX citation

@article{Ha2018WorldModels,
  author = {Ha, D. and Schmidhuber, J.},
  title  = {World Models},
  eprint = {arXiv:1803.10122},
  doi    = {10.5281/zenodo.1207631},
  url    = {https://worldmodels.github.io},
  year   = {2018}
}

Open Source Code

The code to reproduce experiments in this work, as well as IPython notebooks for training and visualizing VAE and MDN-RNN models will be made available at a later date.

Reuse

Diagrams and text are licensed under Creative Commons Attribution CC-BY 4.0 with the source available on GitHub, unless noted otherwise. The figures that have been reused from other sources don’t fall under this license and can be recognized by the citations in their caption.

Appendix

In this section we describe in more detail the models and training methods used in this work.

Variational Autoencoder

We trained a Convolutional Variational Autoencoder (ConvVAE) model as the V Model of our agent. Unlike vanilla autoencoders, enforcing a Gaussian prior over the latent vector z limits its information capacity for compressing each frame, but this Gaussian prior also makes the world model more robust to unrealistic z vectors generated by the M Model. As the environment may give us observations as high dimensional pixel images, we first resize each image to 64x64 pixels and use this resized image as the V Model’s observation. Each pixel is stored as three floating point values between 0 and 1 to represent each of the RGB channels. The ConvVAE takes in this 64x64x3 input tensor and passes this data through 4 convolutional layers to encode it into low dimension vectors μ and σ, each of size N_z. The latent vector z is sampled from the Gaussian prior N(μ, σI). In the Car Racing task [48], N_z is 32 while for the Doom task N_z is 64. The latent vector z is passed through 4 deconvolution layers used to decode and reconstruct the image.

In the following diagram, we describe the shape of our tensor at each layer of the ConvVAE and also describe the details of each layer:

Convolutional Variational Autoencoder

Each convolution and deconvolution layer uses a stride of 2. The layers are indicated in the diagram in italics as Activation-type Output Channels x Filter Size. All convolutional and deconvolutional layers use relu activations except for the output layer, as we need the output to be between 0 and 1. We trained the model for 1 epoch over the data collected from a random policy, using the L^2 distance between the input image and the reconstruction to quantify the reconstruction loss we optimize for, in addition to the KL loss.
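
For concreteness, the sketch below builds such an encoder/decoder pair with tf.keras. The description above fixes only the stride, the activations, and the number of layers; the channel counts and filter sizes here are illustrative assumptions rather than the exact architecture.

import tensorflow as tf

N_Z = 32  # 32 for the Car Racing task, 64 for the Doom task

# Encoder: 4 stride-2 convolutions, then a dense layer producing mu and log(sigma^2).
encoder = tf.keras.Sequential([
  tf.keras.layers.Conv2D(32, 4, strides=2, activation='relu', input_shape=(64, 64, 3)),
  tf.keras.layers.Conv2D(64, 4, strides=2, activation='relu'),
  tf.keras.layers.Conv2D(128, 4, strides=2, activation='relu'),
  tf.keras.layers.Conv2D(256, 4, strides=2, activation='relu'),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(2 * N_Z),   # first N_Z units are mu, the remaining N_Z are log-variance
])

# Decoder: a dense layer followed by 4 stride-2 deconvolutions back to a 64x64x3 image in [0, 1].
decoder = tf.keras.Sequential([
  tf.keras.layers.Dense(1024, input_shape=(N_Z,)),
  tf.keras.layers.Reshape((1, 1, 1024)),
  tf.keras.layers.Conv2DTranspose(128, 5, strides=2, activation='relu'),
  tf.keras.layers.Conv2DTranspose(64, 5, strides=2, activation='relu'),
  tf.keras.layers.Conv2DTranspose(32, 6, strides=2, activation='relu'),
  tf.keras.layers.Conv2DTranspose(3, 6, strides=2, activation='sigmoid'),  # RGB output in [0, 1]
])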

Recurrent Neural Network

For the M Model, we use an LSTM [115] recurrent neural network combined with a Mixture Density Network [38, 39] as the output layer. We use this network to model the probability distribution of the next z in the next time step as a Mixture of Gaussian distribution. This approach is very similar to Graves’ Generating Sequences with RNNs [40] in the Unconditional Handwriting Generation section and also the decoder-only section of Sketch-RNN [36]. The only difference in the approach used is that we did not model the correlation parameter between each element of z, and instead had the MDN-RNN output a diagonal covariance matrix of a factored Gaussian distribution.

MDN-RNN[36]

Unlike the handwriting and sketch generation works, rather than using the MDN-RNN to model the pdf of the next pen stroke, we model instead the pdf of the next latent vector z. We would sample from this pdf at each time step to generate the hallucinated environments. In the Doom task, we also use the MDN-RNN to predict the probability of whether the agent has died in this frame. If that probability is above 50%, then we set done to be True in the virtual dream environment. Given that death is a low probability event at each time step, we find this cutoff approach to be more stable than sampling from the Bernoulli distribution.

The MDN-RNNs were trained for 20 epochs on the data collected from a random policy agent. In the Car Racing task, the LSTM used 256 hidden units, while the Doom task used 512 hidden units. In both tasks, we used 5 Gaussian mixtures and did not model the correlation parameter ρ, hence z is sampled from a factored mixture of Gaussian distribution.

When training the MDN-RNN using teacher forcing from the recorded data, we store a pre-computed set of μ and σ for each of the frames, and sample an input z ~ N(μ, σ) each time we construct a training batch, to prevent overfitting our MDN-RNN to a specific sampled z.
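
As a sketch of this resampling step (the array and index names are assumptions):

import numpy as np

def sample_z_batch(mu, sigma, batch_idx):
  ''' mu, sigma: pre-computed VAE outputs for every recorded frame, shape [T, N_z]. '''
  eps = np.random.randn(*mu[batch_idx].shape)
  return mu[batch_idx] + sigma[batch_idx] * eps   # a fresh z sample for every training batch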

Controller

For both environments, we applied tanh nonlinearities to clip and bound the action space to the appropriate ranges. For instance, in the Car Racing task, the steering wheel has a range from -1 to 1, the acceleration pedal from 0 to 1, and the brakes from 0 to 1. In the Doom environment, we converted the discrete actions into a continuous action space between -1 and 1, and divided this range into thirds to indicate whether the agent is moving left, staying where it is, or moving to the right. We would give the C Model a feature vector as its input, consisting of z and the hidden state of the MDN-RNN. In the Car Racing task, this hidden state is the output vector h of the LSTM, while for the Doom task it is both the cell vector c and the output vector h of the LSTM.
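
For instance, the Doom action mapping might look like the sketch below; the thresholds simply divide [-1, 1] into thirds, and the exact implementation may differ.

def doom_action(a):
  ''' Map a continuous action in [-1, 1] to one of three discrete moves. '''
  if a < -1.0 / 3.0:
    return 'move_left'
  if a > 1.0 / 3.0:
    return 'move_right'
  return 'stay'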

Evolution Strategies

We used Covariance-Matrix Adaptation Evolution Strategy (CMA-ES) [46], an Evolution Strategy [45] to evolve the weights for our C Model. Following the approach described in Evolving Stable Strategies [100], we used a population size of 64, and had each agent perform the task 16 times with different initial random seeds. The fitness value for the agent is the average cumulative reward of the 16 random rollouts. The diagram below charts the best performer, worst performer, and mean fitness of the population of 64 agents at each generation:

Training of CarRacing-v0 [48]

Since the requirement of this environment is to have an agent achieve an average score above 900 over 100 random rollouts, we took the best performing agent at the end of every 25 generations, and tested that agent over 1024 random rollout scenarios to record this average on the red line. After 1800 generations, an agent was able to achieve an average score of 900.46 over 1024 random rollouts. We used 1024 random rollouts rather than 100 because each process of the 64 core machine had been configured to run 16 times already, effectively using a full generation of compute after every 25 generations to evaluate the best agent 1024 times. Below, we plot the results of the same agent evaluated over 100 rollouts:

Histogram of cumulative rewards. Average score is 906 ± 21.

We also experimented with an agent that has access to only the z vector from the VAE, and not the RNN’s hidden states. We tried 2 variations: in the first variation, the C Model mapped z directly to the action space a. In the second variation, we added a hidden layer with 40 tanh activations between z and a, increasing the number of model parameters of the C Model to 1443, making it more comparable with the original setup.

When agent sees only z_t, average score is 632 ± 251.
When agent sees only z_t, with a hidden layer, average score is 788 ± 141.
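
As a sanity check on the 1443-parameter figure for the second variation above, the count works out if z is 32-dimensional (as used for Car Racing earlier in the article; an assumption here) and the action space is 3-dimensional:

```python
# Parameter count of the z-only controller with a 40-unit tanh hidden layer.
z_dim, hidden, a_dim = 32, 40, 3
n_params = (z_dim + 1) * hidden + (hidden + 1) * a_dim   # weights plus biases of both layers
assert n_params == 1443
```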

DoomRNN

We conducted a similar experiment on the hallucinated Doom environment, which we call DoomRNN. Please note that we did not attempt to train our agent inside the actual VizDoom [34] environment; we only used VizDoom to collect training data with a random policy. DoomRNN is more computationally efficient than VizDoom because it operates entirely in latent space, without the need to render a screenshot at each timestep or to run the actual Doom game engine.

Training of DoomRNN

In the virtual DoomRNN environment we constructed, we increased the temperature slightly and used τ = 1.15 to make the agent learn in a more challenging environment. The best agent managed to obtain an average score of 959 over 1024 random rollouts (the highest score of the red line in the diagram). This same agent achieved an average score of 1092 ± 556 over 100 random rollouts when deployed to the actual environment DoomTakeCover-v0 [35].

Histogram of timesteps survived in the actual environment over 100 consecutive trials.
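
The temperature τ mentioned above only changes how z is sampled inside the dream. A minimal sketch, assuming the common MDN-RNN scheme of dividing the mixture logits by τ and scaling the Gaussian noise by √τ (the article's earlier sections define the exact formulation used):

```python
import numpy as np

def sample_next_z_with_temperature(logpi, mu, logsigma, tau=1.15, rng=np.random):
    """Temperature-adjusted version of the earlier dream-sampling sketch."""
    z_dim, n_mix = mu.shape
    z_next = np.zeros(z_dim)
    for i in range(z_dim):
        scaled = logpi[i] / tau                 # tau > 1 flattens the mixture weights
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        k = rng.choice(n_mix, p=probs)
        # tau > 1 also widens each Gaussian, making the hallucinated Doom noisier.
        z_next[i] = mu[i, k] + np.exp(logsigma[i, k]) * np.sqrt(tau) * rng.randn()
    return z_next
```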

Footnotes

  1. In many RL problems, the feedback (positive or negative reward) is given at the end of a sequence of steps. The credit assignment problem asks which steps caused the resulting feedback: which steps should receive credit or blame for the final result?
  2. Typical model-free RL models have on the order of 10^3 to 10^6 model parameters. We look at training models on the order of 10^7 parameters, which is still rather small compared to state-of-the-art deep learning models with 10^8 or even 10^9 parameters. In principle, the procedure described in this article can take advantage of these larger networks if we wanted to use them.
  3. To be clear, the prediction of z_{t+1} is not fed into the controller C directly; C only receives the hidden state h_t and z_t. This is because h_t already contains all the information needed to generate the parameters of a mixture of Gaussian distribution, if we want to sample z_{t+1} to make a prediction.
  4. We find this task interesting because although it is not difficult to train an agent to wobble around randomly generated tracks and obtain a mediocre score, CarRacing-v0 defines “solving” as getting an average reward of 900 over 100 consecutive trials, which means the agent can only afford very few driving mistakes.
  5. We will discuss an iterative training procedure later on for more complicated environments where a random policy is not sufficient.
  6. In principle, we can train both models together in an end-to-end manner, although we found that training each separately is more practical and also achieves satisfactory results. Training each model required less than an hour of computation time on a single NVIDIA P100 GPU. We can also train individual VAE and MDN-RNN models without having to exhaustively tune hyperparameters.
  7. We will discuss how this score compares to other models later on.
  8. In Learning to Think, it is acceptable that the RNN M isn’t always a reliable predictor. A (potentially evolution-based) RNN C can in principle learn to ignore a flawed M, or exploit certain useful parts of M for arbitrary computational purposes, including hierarchical planning. This is not what we do here, though; our present approach is still closer to some of the old systems, where an RNN M is used to predict and plan ahead step by step. Unlike this early work, however, we use evolution for C (as in Learning to Think) rather than traditional RL combined with RNNs, which has the advantage of both simplicity and generality.
  9. Another related connection is to muscle memory. For instance, as you learn to do something like play the piano, you no longer have to spend working memory capacity on translating individual notes to finger motions — this all becomes encoded at a subconscious level.

References

  1. OpenAI Gym[PDF]
    Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J. and Zaremba, W., 2016. ArXiv preprint.
  2. Understanding Comics: The Invisible Art[link]
    McCloud, S., 1993. Tundra Publishing.
  3. More thoughts from Understanding Comics by Scott McCloud[link]
    E, M., 2012. Tumblr.
  4. Counterintuitive behavior of social systems[link]
    Forrester, J.W., 1971. Technology Review.
  5. The Code for Facial Identity in the Primate Brain[link]
    Chang, L. and Tsao, D., 2017. Cell. DOI: 10.1016/j.cell.2017.05.011
  6. Invariant visual representation by single neurons in the human brain[HTML]
    Quiroga, R., Reddy, L., Kreiman, G., Koch, C. and Fried, I., 2005. Nature. DOI: 10.1038/nature03687
  7. Primary Visual Cortex Represents the Difference Between Past and Present[link]
    Nortmann, N., Rekauzke, S., Onat, S., König, P. and Jancke, D., 2015. Cerebral Cortex, Vol 25(6), pp. 1427-1440. DOI: 10.1093/cercor/bht318
  8. Motion-Dependent Representation of Space in Area MT+[link]
    Gerrit, M., Fischer, J. and Whitney, D., 2013. Neuron. DOI: 10.1016/j.neuron.2013.03.010
  9. Akiyoshi’s Illusion Pages[HTML]
    Kitaoka, A., 2002. Kanzen.
  10. Peripheral drift illusion[link]
    Wikipedia contributors, 2017. Wikipedia.
  11. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction[link]
    Watanabe, E., Kitaoka, A., Sakamoto, K., Yasugi, M. and Tanaka, K., 2018. Frontiers in Psychology, Vol 9, pp. 345. DOI: 10.3389/fpsyg.2018.00345
  12. Sensorimotor Mismatch Signals in Primary Visual Cortex of the Behaving Mouse[link]
    Keller, G., Bonhoeffer, T. and Hübener, M., 2012. Neuron, Vol 74(5), pp. 809 - 815. DOI: 10.1016/j.neuron.2012.03.040
  13. A Sensorimotor Circuit in Mouse Cortex for Visual Flow Predictions[link]
    Leinweber, M., Ward, D.R., Sobczak, J.M., Attinger, A. and Keller, G.B., 2017. Neuron, Vol 95(6), pp. 1420 - 1432.e5. DOI: 10.1016/j.neuron.2017.08.036
  14. The ecology of human fear: survival optimization and the nervous system.[link]
    Mobbs, D., Hagan, C.C., Dalgleish, T., Silston, B. and Prévost, C., 2015. Frontiers in Neuroscience. DOI: 10.3389/fnins.2015.00055
  15. Baseball Icon Design (CC 3.0)[link]
    Sotil, G., 2018. The Noun Project.
  16. Tracking Fastballs[link]
    Hirshon, B., 2013. Science Update Interview.
  17. Reinforcement learning: a survey
    Kaelbling, L.P., Littman, M.L. and Moore, A.W., 1996. Journal of AI research, Vol 4, pp. 237—285.
  18. Introduction to Reinforcement Learning[PDF]
    Sutton, R.S. and Barto, A.G., 1998. MIT Press.
  19. Reinforcement Learning
    Wiering, M. and van Otterlo, M., 2012. Springer.
  20. Learning How the World Works: Specifications for Predictive Networks in Robots and Brains
    Werbos, P.J., 1987. Proceedings of IEEE International Conference on Systems, Man and Cybernetics, N.Y..
  21. David Silver’s Lecture on Integrating Learning and Planning[PDF]
    Silver, D., 2017.
  22. Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning and Planning in Non-Stationary Environments[PDF]
    Schmidhuber, J., 1990.
  23. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments[link]
    Schmidhuber, J., 1990. 1990 IJCNN International Joint Conference on Neural Networks, pp. 253-258 vol.2. DOI: 10.1109/IJCNN.1990.137723
  24. Reinforcement Learning in Markovian and Non-Markovian Environments[PDF]
    Schmidhuber, J., 1991. Advances in Neural Information Processing Systems 3, pp. 500—506. Morgan-Kaufmann.
  25. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors
    Linnainmaa, S., 1970.
  26. Gradient Theory of Optimal Flight Paths
    Kelley, H.J., 1960. ARS Journal, Vol 30(10), pp. 947-954.
  27. Applications of advances in nonlinear sensitivity analysis
    Werbos, P.J., 1982. System modeling and optimization, pp. 762—770. Springer.
  28. Deep Reinforcement Learning: A Brief Survey[PDF]
    Arulkumaran, K., Deisenroth, M.P., Brundage, M. and Bharath, A.A., 2017. IEEE Signal Processing Magazine, Vol 34(6), pp. 26-38. DOI: 10.1109/MSP.2017.2743240
  29. Deep Learning in Neural Networks: An Overview
    Schmidhuber, J., 2015. Neural Networks, Vol 61, pp. 85-117. DOI: 10.1016/j.neunet.2014.09.003
  30. A Possibility for Implementing Curiosity and Boredom in Model-building Neural Controllers[PDF]
    Schmidhuber, J., 1990. Proceedings of the First International Conference on Simulation of Adaptive Behavior on From Animals to Animats, pp. 222—227. MIT Press.
  31. On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models[PDF]
    Schmidhuber, J., 2015. ArXiv preprint.
  32. Auto-Encoding Variational Bayes[PDF]
    Kingma, D. and Welling, M., 2013. ArXiv preprint.
  33. Stochastic Backpropagation and Approximate Inference in Deep Generative Models[PDF]
    Jimenez Rezende, D., Mohamed, S. and Wierstra, D., 2014. ArXiv preprint.
  34. ViZDoom: A Doom-based AI Research Platform for Visual Reinforcement Learning[PDF]
    Kempka, M., Wydmuch, M., Runc, G., Toczek, J. and Jaskowski, W., 2016. IEEE Conference on Computational Intelligence and Games, pp. 341—348. IEEE.
  35. DoomTakeCover-v0[link]
    Paquette, P., 2016.
  36. A Neural Representation of Sketch Drawings[link]
    Ha, D. and Eck, D., 2017. ArXiv preprint.
  37. Draw Together with a Neural Network[link]
    Ha, D., Jongejan, J. and Johnson, I., 2017. Google AI Experiments.
  38. Mixture density networks[link]
    Bishop, C.M., 1994. Technical Report. Aston University.
  39. Mixture Density Networks with TensorFlow[link]
    Ha, D., 2015. blog.otoro.net.
  40. Generating sequences with recurrent neural networks[PDF]
    Graves, A., 2013. ArXiv preprint.
  41. Recurrent Neural Network Tutorial for Artists[link]
    Ha, D., 2017. blog.otoro.net.
  42. Experiments in Handwriting with a Neural Network[link]
    Carter, S., Ha, D., Johnson, I. and Olah, C., 2016. Distill. DOI: 10.23915/distill.00004
  43. Evolutionsstrategie: optimierung technischer systeme nach prinzipien der biologischen evolution[link]
    Rechenberg, I., 1973. Frommann-Holzboog.
  44. Numerical Optimization of Computer Models[link]
    Schwefel, H., 1977. John Wiley and Sons, Inc.
  45. A Visual Guide to Evolution Strategies[link]
    Ha, D., 2017. blog.otoro.net.
  46. The CMA Evolution Strategy: A Tutorial[PDF]
    Hansen, N., 2016. ArXiv preprint.
  47. Completely Derandomized Self-Adaptation in Evolution Strategies[PDF]
    Hansen, N. and Ostermeier, A., 2001. Evolutionary Computation, Vol 9(2), pp. 159—195. MIT Press. DOI: 10.1162/106365601750190398
  48. CarRacing-v0[link]
    Klimov, O., 2016.
  49. Self-driving cars in the browser[link]
    Hünermann, J., 2017.
  50. Mar I/O Kart[link]
    Bling, S., 2015.
  51. Using Keras and Deep Deterministic Policy Gradient to play TORCS[HTML]
    Lau, B., 2016.
  52. Car Racing using Reinforcement Learning[PDF]
    Khan, M. and Elibol, O., 2016.
  53. Reinforcement Car Racing with A3C[link]
    Jang, S., Min, J. and Lee, C., 2017.
  54. Deep-Q Learning for Box2D Racecar RL problem.[link]
    Prieur, L., 2017. “GitHub”.
  55. Video Game Exploits[link]
    Wikipedia contributors, 2017. Wikipedia.
  56. Action-Conditional Video Prediction using Deep Networks in Atari Games[PDF]
    Oh, J., Guo, X., Lee, H., Lewis, R. and Singh, S., 2015. ArXiv preprint.
  57. Recurrent Environment Simulators[PDF]
    Chiappa, S., Racaniere, S., Wierstra, D. and Mohamed, S., 2017. ArXiv preprint.
  58. PILCO: A Model-Based and Data-Efficient Approach to Policy Search[PDF]
    Deisenroth, M. and Rasmussen, C., 2011. In Proceedings of the International Conference on Machine Learning.
  59. Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning[PDF]
    Nagabandi, A., Kahn, G., Fearing, R. and Levine, S., 2017. ArXiv preprint.
  60. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990-2010).[HTML]
    Schmidhuber, J., 2010. IEEE Trans. Autonomous Mental Development.
  61. Developmental Robotics, Optimal Artificial Curiosity, Creativity, Music, and the Fine Arts
    Schmidhuber, J., 2006. Connection Science, Vol 18(2), pp. 173—187.
  62. Curious Model-Building Control Systems
    Schmidhuber, J., 1991. In Proc. International Joint Conference on Neural Networks, Singapore, pp. 1458—1463. IEEE.
  63. Curiosity-driven Exploration by Self-supervised Prediction[link]
    Pathak, D., Agrawal, P., A., E. and Darrell, T., 2017. ArXiv preprint.
  64. Intrinsic Motivation Systems for Autonomous Mental Development[PDF]
    Oudeyer, P., Kaplan, F. and Hafner, V., 2007. Trans. Evol. Comp. IEEE Press. DOI: 10.1109/TEVC.2006.890271
  65. Reinforcement driven information acquisition in nondeterministic environments
    Schmidhuber, J., Storck, J. and Hochreiter, S., 1994.
  66. Information-seeking, curiosity, and attention: computational and neural mechanisms[PDF]
    Gottlieb, J., Oudeyer, P., Lopes, M. and Baranes, A., 2013. Cell. DOI: 10.1016/j.tics.2013.09.001
  67. Abandoning objectives: Evolution through the search for novelty alone[link]
    Lehman, J. and Stanley, K., 2011. Evolutionary Computation, Vol 19(2), pp. 189—223. M I T Press.
  68. Memory Consolidation[link]
    Wikipedia contributors, 2017. Wikipedia.
  69. Replay Comes of Age[link]
    Foster, D.J., 2017. Annual Review of Neuroscience, Vol 40(1), pp. 581-602. DOI: 10.1146/annurev-neuro-072116-031538
  70. Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play[PDF]
    Sukhbaatar, S., Lin, Z., Kostrikov, I., Synnaeve, G., Szlam, A. and Fergus, R., 2017. ArXiv preprint.
  71. Emergent Complexity via Multi-Agent Competition[PDF]
    Bansal, T., Pachocki, J., Sidor, S., Sutskever, I. and Mordatch, I., 2017. ArXiv preprint.
  72. Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments[PDF]
    Al-Shedivat, M., Bansal, T., Burda, Y., Sutskever, I., Mordatch, I. and Abbeel, P., 2017. ArXiv preprint.
  73. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem[link]
    Schmidhuber, J., 2013. Frontiers in Psychology, Vol 4, pp. 313. DOI: 10.3389/fpsyg.2013.00313
  74. First Experiments with PowerPlay[PDF]
    Srivastava, R., Steunebrink, B. and Schmidhuber, J., 2012. ArXiv preprint.
  75. Optimal Ordered Problem Solver[PDF]
    Schmidhuber, J., 2002. ArXiv preprint.
  76. A Dual Back-Propagation Scheme for Scalar Reinforcement Learning
    Munro, P.W., 1987. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA, pp. 165-176.
  77. Dynamic Reinforcement Driven Error Propagation Networks with Application to Game Playing
    Robinson, T. and Fallside, F., 1989. CogSci 89.
  78. Neural Networks for Control and System Identification
    Werbos, P.J., 1989. Proceedings of IEEE/CDC Tampa, Florida.
  79. The truck backer-upper: An example of self learning in neural networks
    Nguyen, N. and Widrow, B., 1989. Proceedings of the International Joint Conference on Neural Networks, pp. 357-363. IEEE Press.
  80. Lecture Slides on PILCO[PDF]
    Duvenaud, D., 2016. CSC 2541 Course at University of Toronto.
  81. Data-Efficient Reinforcement Learning in Continuous-State POMDPs[PDF]
    McAllister, R. and Rasmussen, C., 2016. ArXiv preprint.
  82. Improving PILCO with Bayesian Neural Network Dynamics Models[PDF]
    Gal, Y., McAllister, R. and Rasmussen, C., 2016. ICML Workshop on Data-Efficient Machine Learning.
  83. Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks[PDF]
    Depeweg, S., Hernandez-Lobato, J., Doshi-Velez, F. and Udluft, S., 2016. ArXiv preprint.
  84. A Benchmark Environment Motivated by Industrial Control Problems[PDF]
    Hein, D., Depeweg, S., Tokic, M., Udluft, S., Hentschel, A., Runkler, T. and Sterzing, V., 2017. ArXiv preprint.
  85. Learning to Generate Artificial Fovea Trajectories for Target Detection[PDF]
    Schmidhuber, J. and Huber, R., 1991. International Journal of Neural Systems, Vol 2(1-2), pp. 125—134. DOI: 10.1142/S012906579100011X
  86. Learning deep dynamical models from image pixels[PDF]
    Wahlström, N., Schön, T. and Deisenroth, M., 2014. ArXiv preprint.
  87. From Pixels to Torques: Policy Learning with Deep Dynamical Models[PDF]
    Wahlström, N., Schön, T. and Deisenroth, M., 2015. ArXiv preprint.
  88. Deep Spatial Autoencoders for Visuomotor Learning[PDF]
    Finn, C., Tan, X., Duan, Y., Darrell, T., Levine, S. and Abbeel, P., 2015. ArXiv preprint.
  89. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images[PDF]
    Watter, M., Springenberg, J., Boedecker, J. and Riedmiller, M., 2015. ArXiv preprint.
  90. Model-Based RL Lecture at Deep RL Bootcamp 2017[link]
    Finn, C., 2017.
  91. Game Engine Learning from Video[link]
    Guzdial, M., Li, B. and Riedl, M.O., 2017. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 3707—3713. DOI: 10.24963/ijcai.2017/518
  92. Learning to Act by Predicting the Future[PDF]
    Dosovitskiy, A. and Koltun, V., 2016. ArXiv preprint.
  93. Hallucination with Recurrent Neural Networks[link]
    Graves, A., 2015.
  94. Unsupervised Learning of Disentangled Representations from Video[PDF]
    Denton, E. and Birodkar, V., 2017. ArXiv preprint.
  95. The Predictron: End-To-End Learning and Planning[PDF]
    Silver, D., van Hasselt, H., Hessel, M., Schaul, T., Guez, A., Harley, T., Dulac-Arnold, G., Reichert, D., Rabinowitz, N., Barreto, A. and Degris, T., 2016. ArXiv preprint.
  96. Imagination-Augmented Agents for Deep Reinforcement Learning[PDF]
    Weber, T., Racanière, S., Reichert, D., Buesing, L., Guez, A., Rezende, D., Badia, A., Vinyals, O., Heess, N., Li, Y., Pascanu, R., Battaglia, P., Silver, D. and Wierstra, D., 2017. ArXiv preprint.
  97. Visual Interaction Networks[PDF]
    Watters, N., Tacchetti, A., Weber, T., Pascanu, R., Battaglia, P. and Zoran, D., 2017. ArXiv preprint.
  98. PathNet: Evolution Channels Gradient Descent in Super Neural Networks[PDF]
    Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D., Rusu, A., Pritzel, A. and Wierstra, D., 2017. ArXiv preprint.
  99. Evolution Strategies as a Scalable Alternative to Reinforcement Learning[PDF]
    Salimans, T., Ho, J., Chen, X., Sidor, S. and Sutskever, I., 2017. ArXiv preprint.
  100. Evolving Stable Strategies[link]
    Ha, D., 2017. blog.otoro.net.
  101. Welcoming the Era of Deep Neuroevolution[link]
    Stanley, K. and Clune, J., 2017. Uber AI Research.
  102. Playing Atari with Deep Reinforcement Learning[PDF]
    Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. and Riedmiller, M., 2013. ArXiv preprint.
  103. Evolving Neural Networks Through Augmenting Topologies[link]
    Stanley, K.O. and Miikkulainen, R., 2002. Evolutionary Computation, Vol 10(2), pp. 99-127.
  104. Accelerated Neural Evolution Through Cooperatively Coevolved Synapses[PDF]
    Gomez, F., Schmidhuber, J. and Miikkulainen, R., 2008. Journal of Machine Learning Research, Vol 9, pp. 937—965. JMLR.org.
  105. Co-evolving Recurrent Neurons Learn Deep Memory POMDPs[PDF]
    Gomez, F. and Schmidhuber, J., 2005. Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, pp. 491—498. ACM. DOI: 10.1145/1068009.1068092
  106. Autonomous Evolution of Topographic Regularities in Artificial Neural Networks[PDF]
    Gauci, J. and Stanley, K.O., 2010. Neural Computation, Vol 22(7), pp. 1860—1898. MIT Press. DOI: 10.1162/neco.2010.06-09-1042
  107. Parameter-exploring policy gradients[link]
    Sehnke, F., Osendorfer, C., Ruckstieb, T., Graves, A., Peters, J. and Schmidhuber, J., 2010. Neural Networks, Vol 23(4), pp. 551—559. DOI: 10.1016/j.neunet.2009.12.004
  108. Evolving Neural Networks[PDF]
    Miikkulainen, R., 2013. IJCNN.
  109. Evolving Large-scale Neural Networks for Vision-based Reinforcement Learning[HTML]
    Koutnik, J., Cuccu, G., Schmidhuber, J. and Gomez, F., 2013. Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 1061—1068. ACM. DOI: 10.1145/2463372.2463509
  110. A Neuroevolution Approach to General Atari Game Playing[link]
    Hausknecht, M., Lehman, J., Miikkulainen, R. and Stone, P., 2013. IEEE Transactions on Computational Intelligence and AI in Games.
  111. Neuro-Visual Control in the Quake II Environment[PDF]
    Parker, M. and Bryant, B., 2012. IEEE Transactions on Computational Intelligence and AI in Games.
  112. Autoencoder-augmented Neuroevolution for Visual Doom Playing[PDF]
    Alvernaz, S. and Togelius, J., 2017. ArXiv preprint.
  113. Cortical interneurons that specialize in disinhibitory control[link]
    Pi, H., Hangya, B., Kvitsiani, D., Sanders, J., Huang, Z. and Kepecs, A., 2013. Nature. DOI: 10.1038/nature12676
  114. SE3-Pose-Nets: Structured Deep Dynamics Models for Visuomotor Planning and Control[PDF]
    Byravan, A., Leeb, F., Meier, F. and Fox, D., 2017. ArXiv preprint.
  115. Long short-term memory[PDF]
    Hochreiter, S. and Schmidhuber, J., 1997. Neural Computation. MIT Press.
  116. Learning to Forget: Continual Prediction with LSTM[PDF]
    Gers, F., Schmidhuber, J. and Cummins, F., 2000. Neural Computation, Vol 12(10), pp. 2451—2471. MIT Press. DOI: 10.1162/089976600300015015
  117. Nanoconnectomic upper bound on the variability of synaptic plasticity[link]
    Bartol, T.M., Bromer, C., Kinney, J., Chirillo, M.A., Bourne, J.N., Harris, K.M. and Sejnowski, T.J., 2015. eLife Sciences Publications, Ltd. DOI: 10.7554/eLife.10778
  118. Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.
    Ratcliff, R.M., 1990. Psychological review, Vol 97 2, pp. 285-308.
  119. Catastrophic interference in connectionist networks: Can It Be predicted, can It be prevented?[PDF]
    French, R.M., 1994. Advances in Neural Information Processing Systems 6, pp. 1176—1177. Morgan-Kaufmann.
  120. Overcoming catastrophic forgetting in neural networks[PDF]
    Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.M., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D. and Hadsell, R., 2016. ArXiv preprint.
  121. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer[PDF]
    Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G. and Dean, J., 2017. ArXiv preprint.
  122. HyperNetworks[PDF]
    Ha, D., Dai, A. and Le, Q., 2016. ArXiv preprint.
  123. Language Modeling with Recurrent Highway Hypernetworks[PDF]
    Suarez, J., 2017. Advances in Neural Information Processing Systems 30, pp. 3269—3278. Curran Associates, Inc.
  124. WaveNet: A Generative Model for Raw Audio[PDF]
    van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. and Kavukcuoglu, K., 2016. ArXiv preprint.
  125. Attention Is All You Need[PDF]
    Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L. and Polosukhin, I., 2017. ArXiv preprint.
  126. Generative Temporal Models with Memory[PDF]
    Gemici, M., Hung, C., Santoro, A., Wayne, G., Mohamed, S., Rezende, D., Amos, D. and Lillicrap, T., 2017. ArXiv preprint.
  127. One Big Net For Everything[PDF]
    Schmidhuber, J., 2018. Preprint arXiv:1802.08864 [cs.AI].
  128. Learning Complex, Extended Sequences Using the Principle of History Compression
    Schmidhuber, J., 1992. Neural Computation, Vol 4(2), pp. 234-242.